309 research outputs found

    Perceptual Control Theory for Engagement and Disengagement of Users in Public Spaces

    This paper presents Perceptual Control Theory, a model that explains behaviour as an attempt to keep sensory inputs within a desired range, and demonstrates that it can be used to develop an approach for making robots capable of human interaction. In particular, we present an approach that embodies the most salient features of the theory through a feedback loop. This approach has been implemented on a Pepper robot, and a preliminary experiment has been performed by deploying the robot in the entrance hall of a university building. The results show that the robot effectively engages and disengages the attention of people in 43% and 39% of cases, respectively. This result was obtained in a fully natural setting where people were unaware of being involved in an experiment and therefore behaved spontaneously.
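
    A minimal sketch of the kind of perceptual control loop the abstract describes: the controller acts only to reduce the error between a perceived quantity (here, a hypothetical scalar "engagement level") and an internal reference value. The sensor fields, the perception function, and the proportional control rule are illustrative assumptions, not the authors' implementation.

        # Minimal perceptual-control-style feedback loop (illustrative only).
        # The robot does not plan outputs directly; it acts to reduce the error
        # between a perceived quantity and an internal reference value.

        def perceive_engagement(sensors):
            # Hypothetical perception: map raw sensor readings (e.g. face size,
            # gaze direction) to a scalar engagement estimate in [0, 1].
            return min(1.0, max(0.0, sensors.get("face_area", 0.0) * sensors.get("gaze", 0.0)))

        def control_step(sensors, reference=0.8, gain=0.5):
            # Classic PCT loop: perception -> comparison -> action.
            perception = perceive_engagement(sensors)
            error = reference - perception
            # The output is proportional to the error; a positive command might mean
            # "approach / wave to attract attention", a negative one "withdraw".
            return gain * error

        if __name__ == "__main__":
            readings = {"face_area": 0.6, "gaze": 0.9}
            print(control_step(readings))   # small positive command: try to raise engagement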

    Action Selection for Interaction Management: Opportunities and Lessons for Automated Planning

    The central problem in automated planning, action selection, is also a primary topic in the dialogue systems research community; however, the nature of research in that community differs significantly from that of planning, with a focus on end-to-end systems and user evaluations. In particular, numerous toolkits are available for developing speech-based dialogue systems that include not only a method for representing states and actions, but also a mechanism for reasoning about and selecting those actions, often combined with a technical framework designed to simplify the task of creating end-to-end systems. We contrast this situation with that of automated planning, and argue that the dialogue systems community could benefit from some of the directions adopted by the planning community, and that there also exist opportunities and lessons for automated planning.
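
    As a rough illustration of the contrast the abstract draws, dialogue toolkits often bundle the state representation and the action-selection logic into hand-written rules, which makes it hard to swap in a different reasoner later. The state fields and actions below are invented for illustration and do not reflect any particular toolkit's API.

        # Hand-crafted action selection of the kind found in many dialogue toolkits:
        # representation and selection logic are written together.
        # All state fields and actions here are invented for illustration.

        def select_action(state):
            if not state.get("greeted"):
                return "greet_user"
            if state.get("pending_question"):
                return "answer_question"
            if state.get("task_done"):
                return "say_goodbye"
            return "ask_how_to_help"

        state = {"greeted": True, "pending_question": True, "task_done": False}
        print(select_action(state))   # -> "answer_question"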

    From clockwork automata to robot newsreaders

    No abstract available

    Natural language generation for social robotics: Opportunities and challenges

    In the increasingly popular and diverse research area of social robotics, the primary goal is to develop robot agents that exhibit socially intelligent behaviour while interacting in a face-to-face context with human partners. An important aspect of face-to-face social conversation is fluent, flexible linguistic interaction: as Bavelas et al. [1] point out, face-to-face dialogue is both the basic form of human communication and the richest and most flexible, combining unrestricted verbal expression with meaningful non-verbal acts such as gestures and facial displays, along with instantaneous, continuous collaboration between the speaker and the listener. In practice, however, most developers of social robots tend not to use the full possibilities of the unrestricted verbal expression afforded by face-to-face conversation; instead, they generally employ relatively simplistic processes for choosing the words for their robots to say. This contrasts with the work carried out in Natural Language Generation (NLG), the field of computational linguistics devoted to the automated production of high-quality linguistic content: while this research area is also active, most effort in NLG is focussed on producing high-quality written text. This article summarises the state of the art in the two individual research areas of social robotics and natural language generation. It then discusses the reasons why so few current social robots make use of more sophisticated generation techniques. Finally, an approach is proposed for bringing some aspects of NLG into social robotics, concentrating on techniques and tools that are most appropriate to the needs of socially interactive robots.
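
    To make the gap concrete: many social robots pick from canned strings, whereas even a very simple NLG pipeline chooses and realises content dynamically. The template grammar below is a toy example invented for illustration, not a description of any NLG toolkit mentioned in the article.

        import random

        # Canned-output approach: one fixed string per situation.
        CANNED = {"recommend_drink": "I recommend the coffee."}

        # Toy template-based generation: content slots plus surface variation,
        # a (very) small stand-in for what a full NLG pipeline would do.
        TEMPLATES = [
            "How about {item}? It is {quality}.",
            "You might enjoy {item}; people say it is {quality}.",
        ]

        def generate(item, quality):
            return random.choice(TEMPLATES).format(item=item, quality=quality)

        print(CANNED["recommend_drink"])
        print(generate("the flat white", "very smooth"))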

    Modulating the Non-Verbal Social Signals of a Humanoid Robot

    In this demonstration we present a repertoire of social signals generated by the humanoid robot Pepper in the context of the EU-funded project MuMMER. The aim of this research is to provide the robot with the expressive capabilities required to interact with people in real-world public spaces such as shopping malls, and being able to control the non-verbal behaviour of such a robot is key to engaging with humans in an effective way. We propose an approach to modulating the non-verbal social signals of the robot based on systematically varying the amplitude and speed of the joint motions and gathering user evaluations of the resulting gestures. We anticipate that humans' perception of the robot's behaviour will be influenced by these modulations.
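
    A sketch of the modulation idea: represent a gesture as joint keyframes and scale the angles and timing before playback. The keyframe format, joint values, and scaling factors are assumptions for illustration; the actual MuMMER implementation and Pepper's motion API are not shown here.

        # A gesture as a list of keyframes: (joint_name, angle_radians, time_seconds).
        # Scaling the angles changes the amplitude of the motion; scaling the times
        # changes its speed. Both the format and the example values are illustrative.

        WAVE = [
            ("RShoulderPitch", -0.5, 0.4),
            ("RElbowRoll",      1.2, 0.8),
            ("RElbowRoll",      0.6, 1.2),
        ]

        def modulate(gesture, amplitude=1.0, speed=1.0):
            # amplitude > 1 exaggerates the gesture; speed > 1 makes it faster.
            return [(joint, angle * amplitude, t / speed) for joint, angle, t in gesture]

        subtle_slow = modulate(WAVE, amplitude=0.6, speed=0.7)
        big_fast = modulate(WAVE, amplitude=1.4, speed=1.5)
        print(subtle_slow)
        print(big_fast)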

    Comparing User Responses to Limited and Flexible Interaction in a Conversational Interface

    The principles governing written communication have been well studied and are well incorporated into interactive computer systems. However, the role of spoken language in human-computer interaction, while an increasingly popular modality, still needs to be explored further [3]. Evidence suggests that this technology must evolve further in order to support more "natural" conversations [2], and that the use of speech interfaces is correlated with high cognitive demand and attention [4]. In the context of spoken dialogue systems, a continuum has long been identified between "system-initiative" interactions, where the system is in complete control of the overall interaction and the user answers a series of prescribed questions, and "user-initiative" interactions, where the user is free to say anything and the system must respond [5]. However, much of the work in this area predates the recent explosive growth of conversational interfaces.
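
    One way to picture the continuum: a dialogue manager that, at each turn, either asks its next scripted question (system initiative) or hands the floor to the user and tries to interpret whatever comes back (user initiative). The mixing parameter, questions, and return values below are invented for illustration and are not drawn from the paper.

        import random

        QUESTIONS = ["What size would you like?", "Any toppings?", "Pickup or delivery?"]

        def next_turn(pending_questions, initiative=0.5):
            # initiative = 1.0 -> pure system initiative: always ask the next scripted question.
            # initiative = 0.0 -> pure user initiative: always let the user drive.
            if pending_questions and random.random() < initiative:
                return ("ask", pending_questions.pop(0))
            return ("open_prompt", "What would you like to do?")

        pending = list(QUESTIONS)
        for _ in range(3):
            print(next_turn(pending, initiative=0.7))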

    Using General-Purpose Planning for Action Selection in Human-Robot Interaction

    A central problem in designing and implementing interactive systems, action selection, is also a core research topic in automated planning. While numerous toolkits are available for building end-to-end interactive systems, the tight coupling of representation, reasoning, and technical frameworks found in these toolkits often makes it difficult to compare or change the underlying domain models. In contrast, the automated planning community provides general-purpose representation languages and multiple planning engines that support these languages. We describe our recent work on automated planning for task-based social interaction, using a robot that must interact with multiple humans in a bartending domain.
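
    A compact sketch of the decoupling the abstract argues for: the domain model (actions with preconditions and effects) is plain data, and a generic forward-search planner selects actions from it. The facts and actions below only echo the bartending setting and are not the system's actual domain model or planner.

        from collections import deque

        # Toy STRIPS-style domain: each action has preconditions, an add list and a
        # delete list over a set of propositions. All facts and actions are invented.
        ACTIONS = {
            "greet":       ({"customer_present"}, {"greeted"}, set()),
            "take_order":  ({"greeted"}, {"order_known"}, set()),
            "make_drink":  ({"order_known"}, {"drink_ready"}, set()),
            "serve_drink": ({"drink_ready"}, {"served"}, {"drink_ready"}),
        }

        def plan(initial, goal):
            # Breadth-first search over states (frozensets of propositions).
            frontier = deque([(frozenset(initial), [])])
            seen = {frozenset(initial)}
            while frontier:
                state, steps = frontier.popleft()
                if goal <= state:
                    return steps
                for name, (pre, add, dele) in ACTIONS.items():
                    if pre <= state:
                        nxt = frozenset((state - dele) | add)
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append((nxt, steps + [name]))
            return None

        print(plan({"customer_present"}, {"served"}))
        # -> ['greet', 'take_order', 'make_drink', 'serve_drink']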

    A Reusable Interaction Management Module: Use case for Empathic Robotic Tutoring

    We demonstrate the workings of a stochastic Interaction Management module and showcase it as part of a learning environment that includes a robotic tutor who interacts with students, helping them through a pedagogical task.
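
    As an illustration of what "stochastic" interaction management can mean in practice, the sketch below samples the tutor's next dialogue act from a probability distribution conditioned on a coarse dialogue state. The states, acts, and probabilities are all invented; a trained module would estimate such a policy from data.

        import random

        # Invented policy: for each coarse dialogue state, a distribution over
        # tutor dialogue acts.
        POLICY = {
            "student_stuck":   [("give_hint", 0.6), ("encourage", 0.3), ("ask_question", 0.1)],
            "student_correct": [("praise", 0.7), ("next_exercise", 0.3)],
        }

        def choose_act(state):
            acts, weights = zip(*POLICY[state])
            return random.choices(acts, weights=weights, k=1)[0]

        print(choose_act("student_stuck"))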

    Evaluating the impact of variation in automatically generated embodied object descriptions

    Institute for Communicating and Collaborative Systems

    The primary task for any system that aims to automatically generate human-readable output is choice: the input to the system is usually well-specified, but there can be a wide range of options for creating a presentation based on that input. When designing such a system, an important decision is to select which aspects of the output are hard-wired and which allow for dynamic variation. Supporting dynamic choice requires additional representation and processing effort in the system, so it is important to ensure that incorporating variation has a positive effect on the generated output. In this thesis, we concentrate on two types of output generated by a multimodal dialogue system: linguistic descriptions of objects drawn from a database, and conversational facial displays of an embodied talking head.

    In a series of experiments, we add different types of variation to one of these types of output. The impact of each implementation is then assessed through a user evaluation in which human judges compare outputs generated by the basic version of the system to those generated by the modified version; in some cases, we also use automated metrics to compare the versions of the generated output. This series of implementations and evaluations allows us to address three related issues. First, we explore the circumstances under which users perceive and appreciate variation in generated output. Second, we compare two methods of including variation in the output of a corpus-based generation system. Third, we compare human judgements of output quality to the predictions of a range of automated metrics.

    The results of the thesis are as follows. The judges generally preferred output that incorporated variation, except for a small number of cases where other aspects of the output obscured it or the variation was not marked. In general, the output of systems that chose the majority option was judged worse than that of systems that chose from a wider range of outputs. However, the results for non-verbal displays were mixed: users mildly preferred agent outputs where the facial displays were generated using stochastic techniques to those where a simple rule was used, but the stochastic facial displays decreased users' ability to identify contextual tailoring in speech while the rule-based displays did not. Finally, automated metrics based on simple corpus similarity favour generation strategies that do not diverge far from the average corpus examples, which are exactly the strategies that human judges tend to dislike. Automated metrics that measure other properties of the generated output correspond more closely to users' preferences.
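
    To illustrate the final point, a corpus-similarity metric such as simple n-gram overlap necessarily scores outputs that stay close to the reference texts higher, which is exactly the behaviour the thesis finds at odds with human preferences. The bigram-precision function below is a toy stand-in for such metrics, not the evaluation actually used in the thesis.

        # Toy bigram-precision metric: the fraction of bigrams in a candidate output
        # that also occur in a set of reference texts. Outputs that stick close to
        # the references score high; deliberately varied outputs are penalised.

        def bigrams(text):
            tokens = text.lower().split()
            return [tuple(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

        def bigram_precision(candidate, references):
            ref_bigrams = {bg for ref in references for bg in bigrams(ref)}
            cand = bigrams(candidate)
            if not cand:
                return 0.0
            return sum(bg in ref_bigrams for bg in cand) / len(cand)

        refs = ["this is a red mug with a round handle"]
        print(bigram_precision("this is a red mug with a round handle", refs))      # 1.0
        print(bigram_precision("a crimson mug whose handle curves gently", refs))   # much lower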
